1 | A Feasibility Study of Answer-Agnostic Question Generation for Education ...
2 | BiSECT: Learning to Split and Rephrase Sentences with Bitexts ...
3 | "Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks ...
4 | Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases ...
12 | Resolving Pronouns in Twitter Streams: Context can Help! ...
13 | Artificial Intelligence in mental health and the biases of language based models
     In: PLoS One (2020)
14 | Winter is here: summarizing Twitter streams related to pre-scheduled events
15 | Bilingual is At Least Monolingual (BALM): A Novel Translation Algorithm that Encodes Monolingual Priors ...
16 | Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims ...
17 | Complexity-Weighted Loss and Diverse Reranking for Sentence Simplification ...

     Abstract:
     Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models to simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through the use of two main techniques. First, we incorporate content-word complexities, as predicted by a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time and rerank them to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler ...

     Comment: 11 pages, North American Chapter of the Association for Computational Linguistics (NAACL 2019) ...

     Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

     URL: https://dx.doi.org/10.48550/arxiv.1904.02767 https://arxiv.org/abs/1904.02767
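The reranking idea in the abstract above can be sketched as follows: each candidate simplification is scored by a weighted combination of fluency, adequacy, and simplicity, and the highest-scoring candidate is kept. This is a minimal illustration under assumed interfaces, not the authors' implementation; the `toy_*` scorers are hypothetical stand-ins for the paper's learned models.

```python
def rerank(candidates, fluency, adequacy, simplicity, weights=(1.0, 1.0, 1.0)):
    """Return candidates sorted best-first by a weighted sum of the three scores."""
    wf, wa, ws = weights

    def score(c):
        return wf * fluency(c) + wa * adequacy(c) + ws * simplicity(c)

    return sorted(candidates, key=score, reverse=True)


# Toy stand-in scorers (hypothetical; the paper uses trained models instead).
COMMON = {"the", "a", "is", "good", "dog", "ran", "fast"}

def toy_fluency(s):
    # Fraction of words from a small "known vocabulary" list.
    words = s.lower().split()
    return sum(w in COMMON for w in words) / len(words)

def toy_adequacy(s):
    # Constant here; a real model would compare the candidate to the source.
    return 1.0

def toy_simplicity(s):
    # Prefer shorter outputs as a crude proxy for sentence complexity.
    return 1.0 / len(s.split())


best = rerank(["the canine proceeded rapidly", "the dog ran fast"],
              toy_fluency, toy_adequacy, toy_simplicity)[0]
# best is the candidate with more common words and equal length: "the dog ran fast"
```

The complexity-weighted loss from the same abstract would analogously scale each token's training loss by its predicted word complexity, so the model is penalized more for emitting complex words.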
18 | Comparison of Diverse Decoding Methods from Conditional Language Models ...
19 | Paraphrase-Sense-Tagged Sentences
     In: Transactions of the Association for Computational Linguistics, Vol 7, Pp 714-728 (2019)
|
20 |
Learning translations via images with a massively multilingual image dataset
|
|
|
|
BASE
|
|
Show details
|
|
|
|